Last PDF update generated: 27 May 2024, 12:01:38

Introduction

A behavioral study of foraging

The data were acquired by JS, CG, and CREW at Oxford University and at Inserm Lyon. In Lyon, Donut was tested at U1028, and Homer and Dali at U1208. The animals were tested every day with different versions of the apparatus. Testing was done in the animal housing, using the first design built by JS:

On this board the 25 locations are numbered from 1 to 25 from the upper left corner to the lower right corner, going from left to right.

Data loading and formatting

The data files include all data collected from the monkeys in Oxford and in Lyon.

This Markdown deals mostly with data from HOMER and DONUT.

Session types are initially coded “0” for control (transparent doors) and “1” for test (opaque blue doors), but we use the labels Clear vs. Blue in figures and analyses. The codes for the target chosen, and for repeats or misses, reflect the position of the choice (location of the hole selected, from 1 to 25, top to bottom): a negative value indicates a repeat; 999 indicates a pause in the task; and a value between 100 and 2500 indicates a mistake (the animal tried but missed the reward), with the value divided by 100 giving the location.
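As a minimal sketch, the coding scheme above can be decoded as follows (the function name is illustrative, not one used in the analysis scripts; miss codes are assumed to be location × 100, so 999 never collides with them):

```r
# Decode one raw choice code into a trial type and a board location (1..25).
decode_choice <- function(code) {
  if (code == 999)      list(type = "pause",   location = NA)        # long pause
  else if (code < 0)    list(type = "repeat",  location = -code)     # repeat of |code|
  else if (code >= 100) list(type = "miss",    location = code %/% 100)  # missed reward
  else                  list(type = "correct", location = code)      # first correct visit
}

decode_choice(-12)$location  # 12: a repeat of location 12
decode_choice(1400)$type     # "miss" (location 1400 %/% 100 = 14)
```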

!!!! ATTENTION: new data used on 4.1.2024: there was a problem in the session list for Homer; previous analyses for that monkey must not be considered. !!!!

Main general Plots

Let’s first look at descriptive graphs for all sessions per monkey. The figures show, for each monkey, the choices (location of the choice on the board) selected by the animal trial by trial (chronological order from left to right). Green dots represent correct choices, i.e. a location chosen for the first time in the session, with correct pickup of the reward. Blue dots represent returns to previously chosen locations; these are plotted as negative values so their time course can be seen independently of the correct choices. Orange is when the reward is missed. Clear (transparent doors) and Blue (blue doors) sessions are presented separately. First, sample sessions:
General figure HOMER

General figure HOMER

General figure HOMER

General figure HOMER

Then all data are overlaid across sessions to show the tendencies. In particular, one can see the positive and negative trends that reflect monkeys choosing holes from top to bottom or from bottom to top. Homer and Donut have different preferred directions, but this is due to the different positions of the setup in the housing.

General figure DONUT

General figure DONUT

General figure DONUT

General figure DONUT

Summary

The data are then summarized in terms of frequency of each trial (choice) type: Correct, Miss, Repeat.
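A minimal sketch of this per-session summary using base R’s `table()` (toy data; the document’s own aggregates, e.g. `agg.data4B`, are built from the real sessions):

```r
# Toy data: one row per trial with its classified type.
trials <- data.frame(
  session     = c(1, 1, 1, 1, 2, 2, 2),
  choice_type = c("Correct", "Correct", "Repeat", "Miss",
                  "Correct", "Repeat", "Repeat"))

# Counts of each trial type per session.
tab <- table(trials$session, trials$choice_type)
tab
```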

Summary all trials

Summary all trials

Below is the average number of each trial type for the different ‘portes’ (doors) conditions.

We do not look at injections yet; this will be done later in the statistical analyses (DCZ vs. sham). For all 8 monkeys, or for Donut and Homer alone, there is a main effect of doors, in particular on the number of repeats. This makes sense: in the blue condition (compared to clear), monkeys need to rely on memory to avoid repeating, which they obviously do not fully succeed at. We will see later that the number of repeats is a very relevant parameter.

## Stats

We perform a Poisson regression on the door effect for each monkey separately and test whether it influences the number of each trial type, still excluding DCZ sessions.

## Warning: In subset.data.frame(agg.data4BnoDCZ, singe = "Homer") :
## extra argument 'singe' will be ignored

## Warning: In subset.data.frame(agg.data4BnoDCZ, singe = "Homer") :
## extra argument 'singe' will be ignored
## 
## Call:
## glm(formula = trial ~ choice_type * portes, family = "poisson", 
##     data = subset(agg.data4BnoDCZ, singe = "Homer"))
## 
## Coefficients:
##                               Estimate Std. Error z value Pr(>|z|)    
## (Intercept)                    3.03188    0.01494 202.909  < 2e-16 ***
## choice_typeMiss               -1.26213    0.03285 -38.416  < 2e-16 ***
## choice_typeRepeat              0.40542    0.01933  20.978  < 2e-16 ***
## portesclear                    0.06618    0.02758   2.399   0.0164 *  
## choice_typeMiss:portesclear   -0.60045    0.07411  -8.102 5.42e-16 ***
## choice_typeRepeat:portesclear -1.61850    0.07839 -20.647  < 2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for poisson family taken to be 1)
## 
##     Null deviance: 10446.4  on 816  degrees of freedom
## Residual deviance:  4795.5  on 811  degrees of freedom
## AIC: 8302.8
## 
## Number of Fisher Scoring iterations: 5
## Warning: In subset.data.frame(agg.data4BnoDCZ, singe = "Donut") :
## extra argument 'singe' will be ignored
## Warning: In subset.data.frame(agg.data4BnoDCZ, singe = "Donut") :
## extra argument 'singe' will be ignored
## 
## Call:
## glm(formula = trial ~ choice_type * portes, family = "poisson", 
##     data = subset(agg.data4BnoDCZ, singe = "Donut"))
## 
## Coefficients:
##                               Estimate Std. Error z value Pr(>|z|)    
## (Intercept)                    3.03188    0.01494 202.909  < 2e-16 ***
## choice_typeMiss               -1.26213    0.03285 -38.416  < 2e-16 ***
## choice_typeRepeat              0.40542    0.01933  20.978  < 2e-16 ***
## portesclear                    0.06618    0.02758   2.399   0.0164 *  
## choice_typeMiss:portesclear   -0.60045    0.07411  -8.102 5.42e-16 ***
## choice_typeRepeat:portesclear -1.61850    0.07839 -20.647  < 2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for poisson family taken to be 1)
## 
##     Null deviance: 10446.4  on 816  degrees of freedom
## Residual deviance:  4795.5  on 811  degrees of freedom
## AIC: 8302.8
## 
## Number of Fisher Scoring iterations: 5
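Note that the warnings above flag a real problem: `singe = "Homer"` passes `singe` as an extra (ignored) argument of `subset.data.frame()` rather than as a filter, so both models were fit on the pooled data, which is why the “Homer” and “Donut” coefficient tables are identical. The filter needs `==`. A toy demonstration:

```r
# With `=`, subset() ignores the would-be filter (with a warning) and returns
# the full data frame; with `==`, it actually filters the rows.
df <- data.frame(singe = c("Homer", "Donut"), trial = c(10, 20))

nrow(subset(df, singe = "Homer"))   # 2 -- filter silently dropped
nrow(subset(df, singe == "Homer"))  # 1 -- correct filtering
```

The calls above should therefore read `subset(agg.data4BnoDCZ, singe == "Homer")` (and likewise for Donut), and the two regressions need to be rerun.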

## Exploration strategies

One interesting future set of analyses concerns exploration strategies: how animals scan through the setup, and then, of course, how they forget and repeat choices. This needs to be quantified before it can be used to compare ON/OFF DCZ sessions.

One thing we can look at is the spatial variance between successive choices (here we do not differentiate Miss, Correct, or Repeat).

The figures below show the distributions of Euclidean distances between two successive choices. The distance between two adjacent holes (both vertically and horizontally) is 6.5 cm, so we see harmonics at approximately 6.5 cm in the distributions. Quite obviously, the harmonic is stronger in TEST than in CONTROL.

Note that here every choice is counted, even repeats.
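Given the board layout (locations 1 to 25 running left to right from the upper-left corner, 6.5 cm between adjacent holes), these successive-choice distances can be computed as below (a sketch; function names are illustrative):

```r
# Convert board locations (1..25) to x/y coordinates in cm on the 5x5 grid.
loc_xy <- function(loc) {
  list(x = ((loc - 1) %% 5) * 6.5,    # column, left to right
       y = ((loc - 1) %/% 5) * 6.5)   # row, top to bottom
}

# Euclidean distances between successive choices within a session.
succ_dist <- function(locs) {
  xy <- loc_xy(locs)
  sqrt(diff(xy$x)^2 + diff(xy$y)^2)
}

succ_dist(c(1, 2, 7))  # one step right (6.5 cm), then one step down (6.5 cm)
```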

//////////// ATTENTION: here we select only sessions labelled no, sham, or DCZ (i.e. >= 23 for Homer and >= 24 for Donut). ////////////

The distribution of distances between choices has a particular, somewhat log-normal form. We can look at these distributions depending on conditions and also compare them with a random sampling of distances. Let’s first look at this across the 2 monkeys.

Red shows the distributions for the monkeys; blue shows a random sampling of 10,000 distances.

Separated by animal, for the Clear and Blue conditions, we can see (below) that the distributions and the oscillation effects are stronger in Blue compared to Clear.

Spatial strategy. Distributions of euclidean distances

Spatial strategy. Distributions of euclidean distances

We test the difference in distributions between Clear and Blue for the 2 animals separately, excluding DCZ sessions:

## 
##  Asymptotic two-sample Kolmogorov-Smirnov test
## 
## data:  data4B$distloc[data4B$singe == "Homer" & data4B$portes == "clear" & data4B$Injection != "DCZ"] and data4B$distloc[data4B$singe == "Homer" & data4B$portes == "blue" & data4B$Injection != "DCZ"]
## D = 0.12786, p-value = 0.01039
## alternative hypothesis: two-sided
## 
##  Asymptotic two-sample Kolmogorov-Smirnov test
## 
## data:  data4B$distloc[data4B$singe == "Donut" & data4B$portes == "clear" & data4B$Injection != "DCZ"] and data4B$distloc[data4B$singe == "Donut" & data4B$portes == "blue" & data4B$Injection != "DCZ"]
## D = 0.23403, p-value < 2.2e-16
## alternative hypothesis: two-sided

The KS tests indicate a difference between the 2 distributions (Clear and Blue) for both monkeys.

The strength of the ‘harmonic’ could be a marker of the two strategies used in the different sessions. The heavier harmonics in TEST could reflect an increased number of jumps between distant targets, whereas in CONTROL the animal would be more attracted to the visible reward close to its current choice, hence proportionally more cases in which the animal chooses the target just next to the current one (6.5 cm distance). But we would need more control sessions to be sure.

Subset of trials

One observation is that monkeys often behave in a somewhat more controlled, organized manner at the beginning of a session, after which choices become more dispersed. This could correlate approximately with completion of the task (i.e. having gone through all locations). So here we separate the trials into 2 subsets: the first 25 and the rest (25 corresponding to the number of locations on the setup).

Again, here we remove DCZ sessions (DCZ is taken into account in the statistical analyses below).

There are obviously a lot of repeats (checks?) after trial 25, when the animals continue trying to get rewards, and of course especially in the blue sessions. And Donut is a particularly avid checker…

## The summary

Below is the average number of each trial type for the different injection conditions (here again for the first 25 trials):

Injection type 25 trials

Injection type 25 trials

## Spatial strategy

Spatial organization in 2D space

The tendency to choose some locations rather than others can reveal spatial biases, so here we use a 2D density mapping of choices to look at that.
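A minimal sketch of such a 2D density map using `MASS::kde2d` (toy choice sequence; the real analysis uses the session data):

```r
library(MASS)  # provides kde2d(), a bivariate kernel density estimate

locs <- c(1, 2, 7, 7, 8, 13, 19, 25, 7)  # toy sequence of chosen locations (1..25)
x <- (locs - 1) %% 5                     # board column (0..4), left to right
y <- (locs - 1) %/% 5                    # board row (0..4), 0 = top

# Estimate choice density on a 50x50 grid covering the board.
dens <- kde2d(x, y, n = 50, lims = c(-0.5, 4.5, -0.5, 4.5))
image(dens$x, dens$y, dens$z, xlab = "column", ylab = "row")
```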

Let’s also look at the patterns of choices, separating before and after trial 25:

STATISTICAL ANALYSES on DCZ vs Sham sessions

(Note: we have no injections pre-surgery.)

Here are the analyses and descriptions of the data for the 2 main types of sessions used in the DCZ conditions. We subset the data to just the 2 monkeys and the 2 session types with an injection (sham and DCZ). We will also go through some additional measures:

ATTENTION: we remove the CONTROL sessions!

## , ,  = Homer
## 
##        
##         sham DCZ
##   clear    6   8
##   blue     7   9
## 
## , ,  = Donut
## 
##        
##         sham DCZ
##   clear    7  10
##   blue    10  11

## Descriptions of sessions

Here we answer a few general questions about the sessions, choices, repeats, etc.

First, were there more trials (choices+misses+repeats) in DCZ compared to sham sessions?

Summary 25 trials

Summary 25 trials

## 
## Call:
## glm(formula = trial ~ Injection * portes, family = "poisson", 
##     data = subset(agg.data4B, singe == "Homer"))
## 
## Coefficients:
##                         Estimate Std. Error z value Pr(>|z|)    
## (Intercept)              3.06027    0.08839  34.623   <2e-16 ***
## InjectionDCZ             0.19301    0.11244   1.716   0.0861 .  
## portesblue               0.08142    0.11826   0.688   0.4912    
## InjectionDCZ:portesblue  0.22698    0.14823   1.531   0.1257    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for poisson family taken to be 1)
## 
##     Null deviance: 62.688  on 29  degrees of freedom
## Residual deviance: 30.035  on 26  degrees of freedom
## AIC: 191.34
## 
## Number of Fisher Scoring iterations: 4
## 
## Call:
## glm(formula = trial ~ Injection * portes, family = "poisson", 
##     data = subset(agg.data4B, singe == "Donut"))
## 
## Coefficients:
##                          Estimate Std. Error z value Pr(>|z|)    
## (Intercept)              3.347395   0.070888  47.221   <2e-16 ***
## InjectionDCZ            -0.001006   0.092446  -0.011    0.991    
## portesblue               0.909635   0.080259  11.334   <2e-16 ***
## InjectionDCZ:portesblue  0.149585   0.105226   1.422    0.155    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for poisson family taken to be 1)
## 
##     Null deviance: 507.290  on 37  degrees of freedom
## Residual deviance:  75.095  on 34  degrees of freedom
## AIC: 300.41
## 
## Number of Fisher Scoring iterations: 4
Summary 25 trials

Summary 25 trials

Summary 25 trials

Summary 25 trials

Homer performs more trials in DCZ compared to sham (all conditions together). There is only a tendency for the main effect of Injection on session length. We have a significant interaction for Homer, suggesting a lower number of trials in DCZ (shorter sessions) in the blue-door condition.

Then we ask whether the last correct trial occurs later in DCZ than in sham: this would mean that monkeys have more problems, or take more time, finding all, or most, of the rewards.

## 
## Call:
## glm(formula = trial_nb ~ Injection * portes, family = "poisson", 
##     data = subset(agg.maxcor, singe == "Homer"))
## 
## Coefficients:
##                         Estimate Std. Error z value Pr(>|z|)    
## (Intercept)              3.09859    0.08671  35.735   <2e-16 ***
## InjectionDCZ             0.18326    0.11052   1.658   0.0973 .  
## portesblue              -0.06777    0.12006  -0.564   0.5725    
## InjectionDCZ:portesblue  0.26886    0.15008   1.792   0.0732 .  
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for poisson family taken to be 1)
## 
##     Null deviance: 57.387  on 29  degrees of freedom
## Residual deviance: 31.863  on 26  degrees of freedom
## AIC: 192.05
## 
## Number of Fisher Scoring iterations: 4
## 
## Call:
## glm(formula = trial_nb ~ Injection, family = "poisson", data = subset(agg.maxcor, 
##     singe == "Homer" & portes == "blue"))
## 
## Coefficients:
##              Estimate Std. Error z value Pr(>|z|)    
## (Intercept)   3.03082    0.08305  36.496  < 2e-16 ***
## InjectionDCZ  0.45212    0.10154   4.453 8.47e-06 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for poisson family taken to be 1)
## 
##     Null deviance: 43.139  on 15  degrees of freedom
## Residual deviance: 22.429  on 14  degrees of freedom
## AIC: 108.14
## 
## Number of Fisher Scoring iterations: 4
## 
## Call:
## glm(formula = trial_nb ~ Injection * portes, family = "poisson", 
##     data = subset(agg.maxcor, singe == "Donut"))
## 
## Coefficients:
##                          Estimate Std. Error z value Pr(>|z|)    
## (Intercept)              3.352407   0.070711  47.410   <2e-16 ***
## InjectionDCZ            -0.006018   0.092310  -0.065   0.9480    
## portesblue               0.731887   0.081753   8.952   <2e-16 ***
## InjectionDCZ:portesblue  0.212183   0.107004   1.983   0.0474 *  
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for poisson family taken to be 1)
## 
##     Null deviance: 388.396  on 37  degrees of freedom
## Residual deviance:  83.286  on 34  degrees of freedom
## AIC: 305.58
## 
## Number of Fisher Scoring iterations: 4
## 
## Call:
## glm(formula = trial_nb ~ Injection, family = "poisson", data = subset(agg.maxcor, 
##     singe == "Donut" & portes == "blue"))
## 
## Coefficients:
##              Estimate Std. Error z value Pr(>|z|)    
## (Intercept)   4.08429    0.04103   99.54  < 2e-16 ***
## InjectionDCZ  0.20617    0.05412    3.81 0.000139 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for poisson family taken to be 1)
## 
##     Null deviance: 94.668  on 20  degrees of freedom
## Residual deviance: 80.031  on 19  degrees of freedom
## AIC: 210.11
## 
## Number of Fisher Scoring iterations: 4

Both monkeys show a longer time to reach their last reward in a session, especially in the Blue condition (a tendency, p = 0.07, for Homer).

This might mean that those trials are instead repeats, although for Homer there are more trials per session overall. So let’s do the same analysis for repeats, and then analyze the repeats altogether.

In this first analysis we take only the “blue” sessions, because there are very few repeats in “clear”:

## 
## Call:
## glm(formula = trial_nb ~ Injection, family = "poisson", data = subset(agg.maxrpt, 
##     singe == "Homer"))
## 
## Coefficients:
##              Estimate Std. Error z value Pr(>|z|)    
## (Intercept)   3.11668    0.07956   39.18  < 2e-16 ***
## InjectionDCZ  0.43229    0.09759    4.43 9.44e-06 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for poisson family taken to be 1)
## 
##     Null deviance: 47.279  on 15  degrees of freedom
## Residual deviance: 26.844  on 14  degrees of freedom
## AIC: 113.71
## 
## Number of Fisher Scoring iterations: 4
## 
## Call:
## glm(formula = trial_nb ~ Injection, family = "poisson", data = subset(agg.maxrpt, 
##     singe == "Donut"))
## 
## Coefficients:
##              Estimate Std. Error z value Pr(>|z|)    
## (Intercept)   4.27110    0.03737 114.287  < 2e-16 ***
## InjectionDCZ  0.15755    0.04981   3.163  0.00156 ** 
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for poisson family taken to be 1)
## 
##     Null deviance: 86.851  on 20  degrees of freedom
## Residual deviance: 76.790  on 19  degrees of freedom
## AIC: 210.32
## 
## Number of Fisher Scoring iterations: 4

Main effect for both monkeys: the last repeat trial occurs later in DCZ than in sham, which goes with, or is equivalent to, the fact that there are more repeats under DCZ.

Below we graph the overall trends of choices across sessions. The lines represent fits to the choice patterns (as shown in the very first figures). Positive slopes mean searching holes from top to bottom, negative slopes from bottom to top. The line length reflects the number of trials.

There are changes regarding repeats, but they differ between the 2 monkeys. Let’s test the data statistically for all trials, for the first 25 trials, and for trials beyond 25:

Summary 25 trials

Summary 25 trials

Summary 25 trials

Summary 25 trials

Pause in sessions

Monkeys sometimes pause during the task, with short (code 99) or long (code 999) breaks. There is interest in looking at the number and distribution of these pauses, as they may reflect lapses of attention, distractibility, lack of motivation, etc.
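Counting the two pause codes from the raw choice codes might look like this (toy vector; column names are illustrative):

```r
# Toy raw codes for one session: choices, a repeat (-7), and pauses (99, 999).
codes <- c(3, 99, 7, -7, 999, 12, 99)

n_short <- sum(codes == 99)   # short pauses
n_long  <- sum(codes == 999)  # long pauses
c(short = n_short, long = n_long)
```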

## 
## Call:
## glm(formula = choice ~ Injection * portes, family = "poisson", 
##     data = subset(shortp, singe == "Homer"))
## 
## Coefficients:
##                          Estimate Std. Error z value Pr(>|z|)
## (Intercept)                0.5596     0.3780   1.481    0.139
## Injectionsham              0.5390     0.5563   0.969    0.333
## portesblue                -0.3365     0.5855  -0.575    0.566
## Injectionsham:portesblue  -0.3567     0.9181  -0.389    0.698
## 
## (Dispersion parameter for poisson family taken to be 1)
## 
##     Null deviance: 4.2220  on 11  degrees of freedom
## Residual deviance: 2.0437  on  8  degrees of freedom
## AIC: 38.992
## 
## Number of Fisher Scoring iterations: 4
## 
## Call:
## glm(formula = choice ~ Injection * portes, family = "poisson", 
##     data = subset(shortp, singe == "Donut"))
## 
## Coefficients:
##                            Estimate Std. Error z value Pr(>|z|)
## (Intercept)               6.931e-01  5.000e-01   1.386    0.166
## Injectionsham            -8.195e-16  6.124e-01   0.000    1.000
## portesblue                1.542e-01  5.455e-01   0.283    0.778
## Injectionsham:portesblue -1.561e-16  7.029e-01   0.000    1.000
## 
## (Dispersion parameter for poisson family taken to be 1)
## 
##     Null deviance: 14.971  on 20  degrees of freedom
## Residual deviance: 14.753  on 17  degrees of freedom
## AIC: 77.082
## 
## Number of Fisher Scoring iterations: 5

Trial types:

First, let’s look at the frequency of repeats, misses, etc.

## 
## Call:
## glm(formula = trial ~ choice_type/Injection, family = "poisson", 
##     data = subset(agg.data4B, singe == "Homer" & portes == "clear"))
## 
## Coefficients:
##                                 Estimate Std. Error z value Pr(>|z|)    
## (Intercept)                      2.97893    0.09206  32.359  < 2e-16 ***
## choice_typeMiss                 -2.46810    0.45659  -5.406 6.46e-08 ***
## choice_typeRepeat               -2.46810    0.45658  -5.406 6.46e-08 ***
## choice_typeCorrect:InjectionDCZ  0.14564    0.11819   1.232   0.2179    
## choice_typeMiss:InjectionDCZ     0.35417    0.50262   0.705   0.4810    
## choice_typeRepeat:InjectionDCZ   1.28093    0.60552   2.115   0.0344 *  
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for poisson family taken to be 1)
## 
##     Null deviance: 277.538  on 28  degrees of freedom
## Residual deviance:  11.962  on 23  degrees of freedom
## AIC: 131.73
## 
## Number of Fisher Scoring iterations: 4
## 
## Call:
## glm(formula = trial ~ choice_type/Injection, family = "poisson", 
##     data = subset(agg.data4B, singe == "Homer" & portes == "blue"))
## 
## Coefficients:
##                                 Estimate Std. Error z value Pr(>|z|)    
## (Intercept)                      2.59738    0.10314  25.183  < 2e-16 ***
## choice_typeMiss                 -1.43423    0.27044  -5.303 1.14e-07 ***
## choice_typeRepeat               -0.59205    0.17283  -3.426 0.000613 ***
## choice_typeCorrect:InjectionDCZ  0.22927    0.13121   1.747 0.080585 .  
## choice_typeMiss:InjectionDCZ     0.08961    0.31339   0.286 0.774921    
## choice_typeRepeat:InjectionDCZ   0.71742    0.16288   4.405 1.06e-05 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for poisson family taken to be 1)
## 
##     Null deviance: 183.374  on 44  degrees of freedom
## Residual deviance:  39.504  on 39  degrees of freedom
## AIC: 231.19
## 
## Number of Fisher Scoring iterations: 4
## 
## Call:
## glm(formula = trial ~ choice_type/Injection, family = "poisson", 
##     data = subset(agg.data4B, singe == "Donut" & portes == "clear"))
## 
## Coefficients:
##                                 Estimate Std. Error z value Pr(>|z|)    
## (Intercept)                      3.17805    0.07715  41.192  < 2e-16 ***
## choice_typeMiss                 -1.75094    0.21436  -8.168 3.13e-16 ***
## choice_typeRepeat               -2.48491    0.41547  -5.981 2.22e-09 ***
## choice_typeCorrect:InjectionDCZ  0.03279    0.09992   0.328    0.743    
## choice_typeMiss:InjectionDCZ    -0.19035    0.26881  -0.708    0.479    
## choice_typeRepeat:InjectionDCZ   0.91629    0.60553   1.513    0.130    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for poisson family taken to be 1)
## 
##     Null deviance: 336.800  on 35  degrees of freedom
## Residual deviance:  12.165  on 30  degrees of freedom
## AIC: 167.51
## 
## Number of Fisher Scoring iterations: 4
## 
## Call:
## glm(formula = trial ~ choice_type/Injection, family = "poisson", 
##     data = subset(agg.data4B, singe == "Donut" & portes == "blue"))
## 
## Coefficients:
##                                 Estimate Std. Error z value Pr(>|z|)    
## (Intercept)                      3.13114    0.06608  47.383  < 2e-16 ***
## choice_typeMiss                 -0.60541    0.11121  -5.444 5.21e-08 ***
## choice_typeRepeat                0.42991    0.08490   5.064 4.11e-07 ***
## choice_typeCorrect:InjectionDCZ  0.04312    0.09038   0.477   0.6333    
## choice_typeMiss:InjectionDCZ    -0.23228    0.13105  -1.772   0.0763 .  
## choice_typeRepeat:InjectionDCZ   0.31205    0.06878   4.537 5.72e-06 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for poisson family taken to be 1)
## 
##     Null deviance: 619.69  on 62  degrees of freedom
## Residual deviance: 189.97  on 57  degrees of freedom
## AIC: 507.62
## 
## Number of Fisher Scoring iterations: 4

## 
## Call:
## glm(formula = trial ~ Injection * portes, family = "poisson", 
##     data = subset(agg.data4B, singe == "Homer" & choice_type == 
##         "Correct"))
## 
## Coefficients:
##                         Estimate Std. Error z value Pr(>|z|)    
## (Intercept)              2.97893    0.09206  32.359  < 2e-16 ***
## InjectionDCZ             0.14564    0.11819   1.232  0.21786    
## portesblue              -0.38154    0.13825  -2.760  0.00578 ** 
## InjectionDCZ:portesblue  0.08363    0.17660   0.474  0.63580    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for poisson family taken to be 1)
## 
##     Null deviance: 34.480  on 29  degrees of freedom
## Residual deviance: 14.853  on 26  degrees of freedom
## AIC: 164.34
## 
## Number of Fisher Scoring iterations: 4
## 
## Call:
## glm(formula = trial ~ Injection * portes, family = "poisson", 
##     data = subset(agg.data4B, singe == "Donut" & choice_type == 
##         "Correct"))
## 
## Coefficients:
##                         Estimate Std. Error z value Pr(>|z|)    
## (Intercept)              3.17805    0.07715  41.192   <2e-16 ***
## InjectionDCZ             0.03279    0.09992   0.328    0.743    
## portesblue              -0.04692    0.10158  -0.462    0.644    
## InjectionDCZ:portesblue  0.01033    0.13474   0.077    0.939    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for poisson family taken to be 1)
## 
##     Null deviance: 8.1931  on 37  degrees of freedom
## Residual deviance: 7.4311  on 34  degrees of freedom
## AIC: 205.95
## 
## Number of Fisher Scoring iterations: 4
## 
## Call:
## glm(formula = trial ~ Injection, family = "poisson", data = subset(agg.data4B, 
##     singe == "Homer" & choice_type == "Repeat" & portes == "blue"))
## 
## Coefficients:
##              Estimate Std. Error z value Pr(>|z|)    
## (Intercept)    2.0053     0.1387  14.461  < 2e-16 ***
## InjectionDCZ   0.7174     0.1629   4.405 1.06e-05 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for poisson family taken to be 1)
## 
##     Null deviance: 45.125  on 15  degrees of freedom
## Residual deviance: 23.876  on 14  degrees of freedom
## AIC: 95.228
## 
## Number of Fisher Scoring iterations: 4
## 
## Call:
## glm(formula = trial ~ Injection, family = "poisson", data = subset(agg.data4B, 
##     singe == "Donut" & choice_type == "Repeat" & portes == "blue"))
## 
## Coefficients:
##              Estimate Std. Error z value Pr(>|z|)    
## (Intercept)   3.56105    0.05330  66.812  < 2e-16 ***
## InjectionDCZ  0.31205    0.06878   4.537 5.72e-06 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for poisson family taken to be 1)
## 
##     Null deviance: 150.70  on 20  degrees of freedom
## Residual deviance: 129.77  on 19  degrees of freedom
## AIC: 248.18
## 
## Number of Fisher Scoring iterations: 4
## 
## Call:
## glm(formula = Repeat_trials ~ Correct_trials * Injection, data = subset(xyplot_data.first30, 
##     singe == "Homer"))
## 
## Coefficients:
##                             Estimate Std. Error t value Pr(>|t|)   
## (Intercept)                   2.0479     3.1609   0.648  0.52925   
## Correct_trials                0.4007     0.2316   1.730  0.10922   
## InjectionDCZ                 16.3734     4.6042   3.556  0.00395 **
## Correct_trials:InjectionDCZ  -0.9354     0.3166  -2.954  0.01205 * 
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for gaussian family taken to be 2.237345)
## 
##     Null deviance: 78.000  on 15  degrees of freedom
## Residual deviance: 26.848  on 12  degrees of freedom
## AIC: 63.688
## 
## Number of Fisher Scoring iterations: 2
## 
## Call:
## glm(formula = Repeat_trials ~ Correct_trials * Injection, data = subset(xyplot_data.first30, 
##     singe == "Donut"))
## 
## Coefficients:
##                             Estimate Std. Error t value Pr(>|t|)
## (Intercept)                  -0.8783     3.8222  -0.230    0.821
## Correct_trials                0.4072     0.2743   1.484    0.156
## InjectionDCZ                  8.0527     6.0293   1.336    0.199
## Correct_trials:InjectionDCZ  -0.4886     0.4043  -1.209    0.243
## 
## (Dispersion parameter for gaussian family taken to be 4.824659)
## 
##     Null deviance: 100.667  on 20  degrees of freedom
## Residual deviance:  82.019  on 17  degrees of freedom
## AIC: 98.206
## 
## Number of Fisher Scoring iterations: 2

The statistics show that there is no effect of the condition (Injection) for Homer, but there is a significant increase in repeats for Donut. This is when we do not take PORTES into account; if PORTES is included as an interacting fixed effect, the effect for Repeats in Donut weakens (p = 0.052).

Cumsum Reward

We test whether the speed of getting rewards changes between conditions and under DCZ vs. sham.
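The cumulative-reward curve underlying these comparisons is just a running sum of correct (rewarded) trials, e.g.:

```r
# Toy session: 1 = correct (rewarded) trial, 0 = miss or repeat.
rewarded <- c(1, 1, 0, 1, 0, 0, 1)
cumsum(rewarded)  # cumulative rewards, trial by trial: 1 2 2 3 3 3 4
```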

## 
##  Exact two-sample Kolmogorov-Smirnov test
## 
## data:  cum$cv[cum$portes == "blue" & cum$singe == "Homer" & cum$Injection == "sham"] and cum$cv[cum$portes == "blue" & cum$singe == "Homer" & cum$Injection == "DCZ"]
## D = 0.41818, p-value = 0.002047
## alternative hypothesis: two-sided
## 
##  Asymptotic two-sample Kolmogorov-Smirnov test
## 
## data:  cum$cv[cum$portes == "blue" & cum$singe == "Donut" & cum$Injection == "sham"] and cum$cv[cum$portes == "blue" & cum$singe == "Donut" & cum$Injection == "DCZ"]
## D = 0.21, p-value = 0.02431
## alternative hypothesis: two-sided

## 
##  Exact two-sample Kolmogorov-Smirnov test
## 
## data:  cum30$cv[cum30$portes == "blue" & cum30$singe == "Homer" & cum30$Injection == "sham"] and cum30$cv[cum30$portes == "blue" & cum30$singe == "Homer" & cum30$Injection == "DCZ"]
## D = 0.36143, p-value = 0.01305
## alternative hypothesis: two-sided
## 
##  Exact two-sample Kolmogorov-Smirnov test
## 
## data:  cum30$cv[cum30$portes == "blue" & cum30$singe == "Donut" & cum30$Injection == "sham"] and cum30$cv[cum30$portes == "blue" & cum30$singe == "Donut" & cum30$Injection == "DCZ"]
## D = 0.1, p-value = 0.9654
## alternative hypothesis: two-sided

There is an impact of DCZ on the speed at which animals get rewards compared to sham in the opaque condition, but not in the clear one. Specifically, animals seem to accumulate rewards faster under DCZ. The effect is not significant if one takes only the first 50 trials for the 2 animals.

One possibility is that animals make fewer repeats at the beginning, thus accumulating rewards faster.

Cumsum repeats

We test whether repeats appear later, using a cumulative-sum curve.

## 
##  Exact two-sample Kolmogorov-Smirnov test
## 
## data:  cumdr30$cumrepeats[cumdr30$portes == "blue" & cumdr30$singe == "Homer" & cumdr30$Injection == "sham"] and cumdr30$cumrepeats[cumdr30$portes == "blue" & cumdr30$singe == "Homer" & cumdr30$Injection == "DCZ"]
## D = 0.40857, p-value = 0.003115
## alternative hypothesis: two-sided
## 
##  Exact two-sample Kolmogorov-Smirnov test
## 
## data:  cumdr30$cumrepeats[cumdr30$portes == "blue" & cumdr30$singe == "Donut" & cumdr30$Injection == "sham"] and cumdr30$cumrepeats[cumdr30$portes == "blue" & cumdr30$singe == "Donut" & cumdr30$Injection == "DCZ"]
## D = 0.1, p-value = 0.9589
## alternative hypothesis: two-sided

No difference.

## Stats: spatial strategy

## 
##  Asymptotic two-sample Kolmogorov-Smirnov test
## 
## data:  data4B$distloc[data4B$singe == "Homer" & data4B$portes == "clear" & data4B$Injection == "DCZ"] and data4B$distloc[data4B$singe == "Homer" & data4B$portes == "clear" & data4B$Injection == "sham"]
## D = 0.07337, p-value = 0.7881
## alternative hypothesis: two-sided
## 
##  Asymptotic two-sample Kolmogorov-Smirnov test
## 
## data:  data4B$distloc[data4B$singe == "Homer" & data4B$portes == "blue" & data4B$Injection == "DCZ"] and data4B$distloc[data4B$singe == "Homer" & data4B$portes == "blue" & data4B$Injection == "sham"]
## D = 0.063715, p-value = 0.7768
## alternative hypothesis: two-sided
## 
##  Asymptotic two-sample Kolmogorov-Smirnov test
## 
## data:  data4B$distloc[data4B$singe == "Donut" & data4B$portes == "clear" & data4B$Injection == "DCZ"] and data4B$distloc[data4B$singe == "Donut" & data4B$portes == "clear" & data4B$Injection == "sham"]
## D = 0.075606, p-value = 0.5193
## alternative hypothesis: two-sided
## 
##  Asymptotic two-sample Kolmogorov-Smirnov test
## 
## data:  data4B$distloc[data4B$singe == "Donut" & data4B$portes == "blue" & data4B$Injection == "DCZ"] and data4B$distloc[data4B$singe == "Donut" & data4B$portes == "blue" & data4B$Injection == "sham"]
## D = 0.012805, p-value = 1
## alternative hypothesis: two-sided

There is no difference in the distribution of distances between sham and DCZ.

Clusters of small distances

We look at clustering in the sense of succession of choices at small distances.
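One way to count such clusters, sketched here on toy distances (the threshold of 2 board units is illustrative, not necessarily the one used in the analysis above):

```r
## Toy sketch: a "cluster" is a run of >= 2 successive choices whose
## inter-choice distance is below a small threshold.
dists <- c(1.0, 1.4, 5.0, 1.0, 1.0, 1.4, 6.0, 2.2)  # toy inter-choice distances
small <- dists < 2                                   # flag small-distance steps
runs  <- rle(small)                                  # run-length encoding
sum(runs$values & runs$lengths >= 2)                 # number of clusters -> 2
```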

## 
## Call:
## lm(formula = normdist ~ portes.x * Injection.x, data = subset(newdata2, 
##     singe.x == "Donut"))
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -0.79702 -0.13816  0.00995  0.18219  0.59248 
## 
## Coefficients:
##                             Estimate Std. Error t value Pr(>|t|)    
## (Intercept)                  1.49680    0.11744  12.746 1.67e-14 ***
## portes.xblue                 0.13707    0.15312   0.895    0.377    
## Injection.xDCZ               0.16689    0.15312   1.090    0.283    
## portes.xblue:Injection.xDCZ -0.09747    0.20463  -0.476    0.637    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 0.3107 on 34 degrees of freedom
## Multiple R-squared:  0.05557,    Adjusted R-squared:  -0.02776 
## F-statistic: 0.6668 on 3 and 34 DF,  p-value: 0.5782
## 
## Call:
## lm(formula = normdist ~ portes.x * Injection.x, data = subset(newdata2, 
##     singe.x == "Homer"))
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -0.81273 -0.16082 -0.01963  0.13199  0.69782 
## 
## Coefficients:
##                             Estimate Std. Error t value Pr(>|t|)    
## (Intercept)                  1.57723    0.12261  12.863 8.85e-13 ***
## portes.xblue                 0.15032    0.16709   0.900    0.377    
## Injection.xDCZ              -0.08029    0.16220  -0.495    0.625    
## portes.xblue:Injection.xDCZ  0.06232    0.22185   0.281    0.781    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 0.3003 on 26 degrees of freedom
## Multiple R-squared:  0.1073, Adjusted R-squared:  0.004316 
## F-statistic: 1.042 on 3 and 26 DF,  p-value: 0.3906

## 
## Call:
## glm(formula = clust ~ portes * Injection, family = "poisson", 
##     data = subset(distClustyClust.session, singe == "Donut"))
## 
## Coefficients:
##                         Estimate Std. Error z value Pr(>|z|)    
## (Intercept)              0.93048    0.23736   3.920 8.85e-05 ***
## portesblue               0.24834    0.29513   0.841    0.400    
## InjectionDCZ             0.04319    0.30677   0.141    0.888    
## portesblue:InjectionDCZ -0.06184    0.39162  -0.158    0.875    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for poisson family taken to be 1)
## 
##     Null deviance: 5.3423  on 37  degrees of freedom
## Residual deviance: 4.0878  on 34  degrees of freedom
## AIC: Inf
## 
## Number of Fisher Scoring iterations: 4
## 
## Call:
## glm(formula = clust ~ portes * Injection, family = "poisson", 
##     data = subset(distClustyClust.session, singe == "Homer"))
## 
## Coefficients:
##                         Estimate Std. Error z value Pr(>|z|)    
## (Intercept)              0.87547    0.26352   3.322 0.000893 ***
## portesblue               0.13179    0.34874   0.378 0.705493    
## InjectionDCZ            -0.01223    0.34953  -0.035 0.972094    
## portesblue:InjectionDCZ -0.11073    0.46929  -0.236 0.813465    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for poisson family taken to be 1)
## 
##     Null deviance: 2.0457  on 29  degrees of freedom
## Residual deviance: 1.7967  on 26  degrees of freedom
## AIC: Inf
## 
## Number of Fisher Scoring iterations: 4

## 
## Call:
## lm(formula = normclust ~ portes.x * Injection.x, data = subset(newdata2, 
##     singe.x == "Donut"))
## 
## Residuals:
##       Min        1Q    Median        3Q       Max 
## -0.089494 -0.018065  0.001053  0.012574  0.133817 
## 
## Coefficients:
##                             Estimate Std. Error t value Pr(>|t|)    
## (Intercept)                  0.10861    0.01621   6.699 1.08e-07 ***
## portes.xblue                 0.05384    0.02114   2.547   0.0156 *  
## Injection.xDCZ               0.01660    0.02114   0.785   0.4377    
## portes.xblue:Injection.xDCZ -0.02010    0.02825  -0.712   0.4816    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 0.04289 on 34 degrees of freedom
## Multiple R-squared:  0.2231, Adjusted R-squared:  0.1545 
## F-statistic: 3.254 on 3 and 34 DF,  p-value: 0.03349
## 
## Call:
## lm(formula = normclust ~ portes.x * Injection.x, data = subset(newdata2, 
##     singe.x == "Homer"))
## 
## Residuals:
##       Min        1Q    Median        3Q       Max 
## -0.127614 -0.037525 -0.000705  0.039737  0.105619 
## 
## Coefficients:
##                              Estimate Std. Error t value Pr(>|t|)    
## (Intercept)                  0.154979   0.021320   7.269 1.02e-07 ***
## portes.xblue                 0.009766   0.029054   0.336    0.739    
## Injection.xDCZ              -0.039198   0.028204  -1.390    0.176    
## portes.xblue:Injection.xDCZ  0.039104   0.038576   1.014    0.320    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 0.05222 on 26 degrees of freedom
## Multiple R-squared:  0.1546, Adjusted R-squared:  0.05706 
## F-statistic: 1.585 on 3 and 26 DF,  p-value: 0.217

After counting and normalizing, we do not find any difference in the clustering of distances between choices, nor in the number of clusters. Hence the animals' strategy, in terms of clustering choices at small distances, does not seem to differ between injections.

Note that, surprisingly, only Donut shows the effect whereby the number of short-distance clusters is larger in the Blue than in the Clear condition. Homer does not seem to change his strategy in that regard between the two conditions.

However, this is surprising because the analyses and figures above suggest that under DCZ, in the opaque condition, animals get rewards faster, which could be accompanied by shorter distances between two successive choices. !! NEED to investigate further !!

Cumsum distance - trajectory

Test whether the speed of getting rewards changes between conditions and under DCZ vs sham.
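A minimal sketch of such a trajectory measure, on toy coordinates:

```r
## Toy sketch: cumulative distance travelled across successive choices,
## which can then be compared between conditions and injections.
x <- c(1, 3, 3, 5)                       # toy x coordinates of choices
y <- c(1, 1, 4, 4)                       # toy y coordinates of choices
step <- sqrt(diff(x)^2 + diff(y)^2)      # distance between successive choices
cumsum(step)                             # -> 2 5 7
```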

Post-error reaction vs Reward

Test whether the choices made after unrewarded trials differ, e.g. whether the next chosen location is close or far. We hypothesize that in the Blue condition (where the animal cannot see the rewards), the distance moved after a negative outcome (a repeat) is larger than after a correct, rewarded response: when rewarded, the animal should stay "in the patch", i.e. close to where it got the reward.

We remove the Clear condition, and the misses, as they are not appropriate for this analysis.
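The distance between successive choices can be derived from the location codes; a sketch, assuming the 25 locations form a 5x5 grid read left to right from the upper-left corner (as described in the introduction):

```r
## Sketch: convert a location code (1-25) into (row, col) on an assumed
## 5x5 board, then compute Euclidean distances between successive choices.
loc_to_rc <- function(loc) cbind(row = (loc - 1) %/% 5, col = (loc - 1) %% 5)
next_dist <- function(locs) {
  rc <- loc_to_rc(locs)
  sqrt(diff(rc[, "row"])^2 + diff(rc[, "col"])^2)
}
next_dist(c(1, 2, 7, 25))   # -> 1 1 ~4.24
```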

## 
## Call:
## lm(formula = nextdist ~ portes * choice_type * Injection, data = mean.nextdist)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -13.7283  -1.8922  -0.5644   1.0751  13.0719 
## 
## Coefficients:
##                                           Estimate Std. Error t value Pr(>|t|)
## (Intercept)                                12.3390     0.8097  15.239   <2e-16
## portesblue                                 -2.6884     1.0453  -2.572   0.0109
## choice_typeRepeat                           2.1093     1.8989   1.111   0.2681
## InjectionDCZ                                1.3893     1.0121   1.373   0.1715
## portesblue:choice_typeRepeat               -0.3817     2.1129  -0.181   0.8568
## portesblue:InjectionDCZ                    -0.7645     1.3343  -0.573   0.5674
## choice_typeRepeat:InjectionDCZ             -6.3336     2.6315  -2.407   0.0171
## portesblue:choice_typeRepeat:InjectionDCZ   6.1889     2.8979   2.136   0.0340
##                                              
## (Intercept)                               ***
## portesblue                                *  
## choice_typeRepeat                            
## InjectionDCZ                                 
## portesblue:choice_typeRepeat                 
## portesblue:InjectionDCZ                      
## choice_typeRepeat:InjectionDCZ            *  
## portesblue:choice_typeRepeat:InjectionDCZ *  
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 3.435 on 182 degrees of freedom
## Multiple R-squared:  0.1488, Adjusted R-squared:  0.1161 
## F-statistic: 4.545 on 7 and 182 DF,  p-value: 0.0001069
## 
## Call:
## lm(formula = nextdist ~ Injection * portes * choice_type, data = subset(mean.nextdist, 
##     singe == "Donut"))
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -6.2451 -1.3954 -0.3678  0.8892 10.2686 
## 
## Coefficients:
##                                           Estimate Std. Error t value Pr(>|t|)
## (Intercept)                               12.36261    0.83099  14.877  < 2e-16
## InjectionDCZ                               0.38253    1.04418   0.366  0.71482
## portesblue                                -2.87378    1.03457  -2.778  0.00645
## choice_typeRepeat                          2.17183    2.87862   0.754  0.45219
## InjectionDCZ:portesblue                   -0.55247    1.34736  -0.410  0.68258
## InjectionDCZ:choice_typeRepeat            -4.08364    3.53330  -1.156  0.25031
## portesblue:choice_typeRepeat              -0.06186    3.00766  -0.021  0.98363
## InjectionDCZ:portesblue:choice_typeRepeat  4.22816    3.73287   1.133  0.25983
##                                              
## (Intercept)                               ***
## InjectionDCZ                                 
## portesblue                                ** 
## choice_typeRepeat                            
## InjectionDCZ:portesblue                      
## InjectionDCZ:choice_typeRepeat               
## portesblue:choice_typeRepeat                 
## InjectionDCZ:portesblue:choice_typeRepeat    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 2.756 on 109 degrees of freedom
## Multiple R-squared:  0.2042, Adjusted R-squared:  0.1531 
## F-statistic: 3.995 on 7 and 109 DF,  p-value: 0.0006309
## 
## Call:
## lm(formula = nextdist ~ Injection * portes * choice_type, data = subset(mean.nextdist, 
##     singe == "Homer"))
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -15.1652  -2.4102  -0.6418   1.8503  11.6349 
## 
## Coefficients:
##                                           Estimate Std. Error t value Pr(>|t|)
## (Intercept)                                 12.302      1.630   7.548 1.85e-10
## InjectionDCZ                                 2.863      2.021   1.416   0.1614
## portesblue                                  -2.189      2.305  -0.950   0.3457
## choice_typeRepeat                            2.118      2.976   0.712   0.4792
## InjectionDCZ:portesblue                     -1.298      2.825  -0.459   0.6475
## InjectionDCZ:choice_typeRepeat              -9.108      4.425  -2.058   0.0436
## portesblue:choice_typeRepeat                -1.404      3.719  -0.377   0.7071
## InjectionDCZ:portesblue:choice_typeRepeat    8.923      5.180   1.722   0.0897
##                                              
## (Intercept)                               ***
## InjectionDCZ                                 
## portesblue                                   
## choice_typeRepeat                            
## InjectionDCZ:portesblue                      
## InjectionDCZ:choice_typeRepeat            *  
## portesblue:choice_typeRepeat                 
## InjectionDCZ:portesblue:choice_typeRepeat .  
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 4.312 on 65 degrees of freedom
## Multiple R-squared:  0.1486, Adjusted R-squared:  0.05693 
## F-statistic: 1.621 on 7 and 65 DF,  p-value: 0.1453
## 
## Call:
## lm(formula = nextdist ~ Injection * choice_type, data = subset(mean.nextdist, 
##     singe == "Donut" & portes == "blue"))
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -5.0988 -1.2658 -0.2679  0.8499 10.1138 
## 
## Coefficients:
##                                Estimate Std. Error t value Pr(>|t|)    
## (Intercept)                      9.4888     0.5413  17.529  < 2e-16 ***
## InjectionDCZ                    -0.1699     0.7479  -0.227  0.82083    
## choice_typeRepeat                2.1100     0.7655   2.756  0.00724 ** 
## InjectionDCZ:choice_typeRepeat   0.1445     1.0577   0.137  0.89166    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 2.421 on 80 degrees of freedom
## Multiple R-squared:  0.1767, Adjusted R-squared:  0.1458 
## F-statistic: 5.724 on 3 and 80 DF,  p-value: 0.001337
## 
## Call:
## lm(formula = nextdist ~ Injection * choice_type, data = subset(mean.nextdist, 
##     singe == "Homer" & portes == "blue"))
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -5.1781 -2.3404 -0.7444  1.9236  8.8767 
## 
## Coefficients:
##                                Estimate Std. Error t value Pr(>|t|)    
## (Intercept)                     10.1126     1.1708   8.637    5e-11 ***
## InjectionDCZ                     1.5655     1.4179   1.104    0.276    
## choice_typeRepeat                0.7139     1.6032   0.445    0.658    
## InjectionDCZ:choice_typeRepeat  -0.1855     1.9347  -0.096    0.924    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 3.098 on 44 degrees of freedom
## Multiple R-squared:  0.05919,    Adjusted R-squared:  -0.004961 
## F-statistic: 0.9227 on 3 and 44 DF,  p-value: 0.4378

We observe a post-error effect: after a repeat (negative outcome), the next choice is further away than after a rewarded choice. The effect is, however, not significant for Homer, and there is no DCZ effect.

Another option is to check whether the probability of making a short vs. long shift depends on the previous reward, using logistic regression.
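Such a logistic regression could look like the sketch below; `short_shift` and `prev_rewarded` are hypothetical names and the data are toys:

```r
## Toy sketch: probability of a short (vs long) shift as a function of
## whether the previous choice was rewarded.
set.seed(2)
d <- data.frame(
  prev_rewarded = rbinom(100, 1, 0.5),  # hypothetical previous-outcome flag
  short_shift   = rbinom(100, 1, 0.5)   # hypothetical short/long shift flag
)
fit <- glm(short_shift ~ prev_rewarded, family = binomial, data = d)
summary(fit)$coefficients
```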

Now, how does this work across successive trials: does the cumulative outcome (e.g. the average outcome over the last 5 choices) affect the leaving distance after a negative outcome? The hypothesis is that there should be a threshold for "leaving the patch" in terms of the average reward encountered. We first look not at average values but at successions of rewards: ..010, .0110, 01110, 11110
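A sketch of how such reward successions can be labelled, on a toy outcome vector (the pattern length of 3 preceding trials is illustrative):

```r
## Toy sketch: for each failure (0), record the pattern of the k
## preceding outcomes, e.g. "1110" = three rewards then a failure.
outcomes <- c(1, 1, 0, 1, 1, 1, 0, 0, 1, 0)  # toy 1 = rewarded, 0 = not
k <- 3
neg <- which(outcomes == 0 & seq_along(outcomes) > k)
sapply(neg, function(i)
  paste0(paste(outcomes[(i - k):(i - 1)], collapse = ""), "0"))
## -> "1110" "1100" "0010"
```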

## 
## Call:
## lm(formula = distNeg ~ Value * Injection, data = subset(MeanValueNeg.nextdist, 
##     singe == "Donut" & Feedback == "Negative"))
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -5.8347 -1.9579 -0.3966  1.2683 15.2023 
## 
## Coefficients:
##                    Estimate Std. Error t value Pr(>|t|)    
## (Intercept)          9.7183     0.8168  11.898  < 2e-16 ***
## Value                3.2705     1.7683   1.850  0.06725 .  
## InjectionDCZ         2.1429     1.1124   1.926  0.05682 .  
## Value:InjectionDCZ  -7.5246     2.3665  -3.180  0.00195 ** 
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 3.258 on 103 degrees of freedom
## Multiple R-squared:  0.1064, Adjusted R-squared:  0.0804 
## F-statistic: 4.089 on 3 and 103 DF,  p-value: 0.008701
## 
## Call:
## lm(formula = distNeg ~ Value * Injection, data = subset(MeanValueNeg.nextdist, 
##     singe == "Homer" & Feedback == "Negative"))
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -8.7392 -2.7161 -0.8982  1.6590 20.5108 
## 
## Coefficients:
##                    Estimate Std. Error t value Pr(>|t|)    
## (Intercept)          14.492      2.377   6.097  6.3e-08 ***
## Value                -7.211      4.229  -1.705   0.0929 .  
## InjectionDCZ         -3.241      2.800  -1.158   0.2512    
## Value:InjectionDCZ    8.133      5.130   1.585   0.1177    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 4.889 on 66 degrees of freedom
## Multiple R-squared:  0.05137,    Adjusted R-squared:  0.008255 
## F-statistic: 1.191 on 3 and 66 DF,  p-value: 0.3199

Distance to repeat

## 
## Call:
## glm(formula = d2rpt ~ Injection * portes, family = "poisson", 
##     data = subset(stats.repeat, singe == "Homer"))
## 
## Coefficients:
##                         Estimate Std. Error z value Pr(>|z|)    
## (Intercept)               1.4469     0.1715   8.438  < 2e-16 ***
## InjectionDCZ             -0.1407     0.2241  -0.628 0.530242    
## portesblue                0.6145     0.1789   3.435 0.000593 ***
## InjectionDCZ:portesblue   0.3744     0.2315   1.618 0.105765    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for poisson family taken to be 1)
## 
##     Null deviance: 1061.68  on 204  degrees of freedom
## Residual deviance:  965.15  on 201  degrees of freedom
## AIC: 1713.9
## 
## Number of Fisher Scoring iterations: 5
## 
## Call:
## glm(formula = d2rpt ~ Injection * portes, family = "poisson", 
##     data = subset(stats.repeat, singe == "Donut"))
## 
## Coefficients:
##                         Estimate Std. Error z value Pr(>|z|)    
## (Intercept)              2.11626    0.10976  19.280  < 2e-16 ***
## InjectionDCZ            -0.06396    0.14568  -0.439    0.661    
## portesblue               0.60673    0.11062   5.485 4.13e-08 ***
## InjectionDCZ:portesblue  0.10458    0.14673   0.713    0.476    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for poisson family taken to be 1)
## 
##     Null deviance: 6066.4  on 897  degrees of freedom
## Residual deviance: 5954.8  on 894  degrees of freedom
## AIC: 9830.7
## 
## Number of Fisher Scoring iterations: 6
## 
##   Simultaneous Tests for General Linear Hypotheses
## 
## Multiple Comparisons of Means: Tukey Contrasts
## 
## 
## Fit: glm(formula = d2rpt ~ -1 + BV, family = "poisson", data = subset(d, 
##     singe == "Homer"))
## 
## Linear Hypotheses:
##                             Estimate Std. Error z value Pr(>|z|)    
## DCZ.clear - sham.clear == 0 -0.14067    0.22412  -0.628  0.91347    
## sham.blue - sham.clear == 0  0.61450    0.17889   3.435  0.00283 ** 
## DCZ.blue - sham.clear == 0   0.84823    0.17364   4.885  < 0.001 ***
## sham.blue - DCZ.clear == 0   0.75517    0.15304   4.934  < 0.001 ***
## DCZ.blue - DCZ.clear == 0    0.98890    0.14687   6.733  < 0.001 ***
## DCZ.blue - sham.blue == 0    0.23373    0.05782   4.042  < 0.001 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## (Adjusted p values reported -- single-step method)
## 
##   Simultaneous Tests for General Linear Hypotheses
## 
## Multiple Comparisons of Means: Tukey Contrasts
## 
## 
## Fit: glm(formula = d2rpt ~ -1 + BV, family = "poisson", data = subset(d, 
##     singe == "Donut"))
## 
## Linear Hypotheses:
##                             Estimate Std. Error z value Pr(>|z|)    
## DCZ.clear - sham.clear == 0 -0.06396    0.14568  -0.439   0.9661    
## sham.blue - sham.clear == 0  0.60673    0.11062   5.485   <0.001 ***
## DCZ.blue - sham.clear == 0   0.64735    0.11031   5.868   <0.001 ***
## sham.blue - DCZ.clear == 0   0.67070    0.09676   6.932   <0.001 ***
## DCZ.blue - DCZ.clear == 0    0.71131    0.09641   7.378   <0.001 ***
## DCZ.blue - sham.blue == 0    0.04062    0.01755   2.314   0.0757 .  
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## (Adjusted p values reported -- single-step method)

Regarding the distance between a choice and its later repeat, DCZ effects are absent in Donut but present in Homer. For Homer, distances to repeat are longer under DCZ than under sham in the Blue condition (p < 0.001), whereas in the Clear condition the DCZ-sham difference goes in the opposite direction but is not significant.

2D choices


  1. The first descriptive analyses were in Graph_PST2020choices.R, now in PST2020_DREADDs.rmd.↩︎